16 research outputs found

    Measuring Object Rotation via Visuo-Tactile Segmentation of Grasping Region

    During robotic manipulation tasks, grasped objects occasionally fall because of rotation caused by slippage. This can be prevented by obtaining tactile information that provides better knowledge of the physical properties of the grasp. In this paper, we estimate the rotation angle of a grasped object when slippage occurs. We implement a system made up of a neural network that segments the contact region and an algorithm that estimates the rotation angle of that region. This method is applied to DIGIT tactile sensors. Our system was additionally trained and tested with our publicly available dataset, which is, to the best of our knowledge, the first dataset in the literature for tactile segmentation from non-synthetic images, and with which we attained scores of 95% and 90% for the Dice and IoU metrics in the worst scenario. Moreover, we obtained a maximum error of ≈ 3° when testing with objects not previously seen by our system across 45 different lifts. This proves that our approach is able to detect the slippage movement, thus enabling a reaction that prevents the object from falling. This work was supported by the Ministry of Science and Innovation of the Spanish Government through the research project PID2021-122685OB-I00 and by the University of Alicante through the grant UAFPU21-26.
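    The abstract describes the angle-estimation step only at a high level. As a hedged illustration, the sketch below estimates the in-plane rotation of a segmented contact region between two binary masks using OpenCV's minimum-area rectangle; it is a minimal stand-in, not the authors' published algorithm (a related entry below mentions a thinning-based method), and all function names are assumptions.

```python
# Illustrative sketch (not the authors' code): estimate the in-plane rotation of
# a segmented contact region from a binary mask, e.g. one produced by a
# segmentation network running on DIGIT tactile images.
import cv2
import numpy as np

def region_angle(mask: np.ndarray) -> float:
    """Return the orientation (degrees) of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("empty mask: no contact region found")
    largest = max(contours, key=cv2.contourArea)
    # minAreaRect returns ((cx, cy), (w, h), angle); the angle convention
    # depends on the OpenCV version.
    (_, _), (_, _), angle = cv2.minAreaRect(largest)
    return angle

def rotation_since_grasp(mask_t0: np.ndarray, mask_t1: np.ndarray) -> float:
    """Rotational slippage = change in region orientation between two frames."""
    return region_angle(mask_t1) - region_angle(mask_t0)
```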

    Detection of urban objects based on 3D data

    This project addresses the 3D detection of urban objects, specifically the comparison of two neural networks incorporated into a detection pipeline. The goal of this pipeline is for the network to distinguish real objects (people, cars, motorbikes, bicycles) from objects that 2D detectors predict as real but are not, such as people, cars, motorbikes and bicycles appearing in advertisements, billboards and reflections seen along the road.

    Slippage Prediction from Segmentation of Tactile Images

    Using tactile sensors is becoming common practice for achieving complex robotic manipulation tasks. These kinds of sensors provide extra information about the physical properties of the grasping and/or manipulation task. In this work, we have implemented a system that measures the rotational slippage of in-hand objects. Our proposal uses the vision-based tactile sensors known as DIGIT, which allow us to capture contact images that are then processed and interpreted. In particular, our method applies a neural network model to detect the touch/contact region. We then extract visual features from the detected contact region and estimate the angle generated by unwanted slippage. Our method obtains scores of 95% and 91% in the Dice and IoU metrics for contact estimation. In addition, it achieves a mean rotational error of 3 degrees in the worst case with previously unseen objects. This work was funded by the Ministry of Science and Innovation through the project PID2021-122685OB-I00 and by the predoctoral grant UAFPU21-26 of the University of Alicante.
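    Both of the entries above report segmentation quality using the Dice and IoU metrics. For reference, here is a minimal NumPy sketch of how those two scores are conventionally computed for binary masks; these are the standard definitions, not code taken from the papers.

```python
# Standard Dice and IoU scores for binary segmentation masks, as used to
# evaluate the contact-region network described above.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)
```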

    Three-dimensional reconstruction using SFM for actual pedestrian classification

    In recent years, the popularity of intelligent and autonomous vehicles has grown notably. In fact, commercial models with a high degree of self-driving autonomy already exist. A key feature of this kind of vehicle is object detection, which is commonly performed in 2D space. This has an inherent issue: an object and a depiction of that object would both be classified as the actual object, which is inadequate since urban environments are full of billboards, printed adverts and posters that would likely make these systems fail. A 3D sensor could be leveraged to overcome this problem, although it would make the platform more expensive, energy-inefficient and computationally complex. We therefore propose the use of structure from motion to reconstruct the three-dimensional information of the scene from a set of images, and merge the 2D and 3D data to differentiate actual objects from depictions. Our approach works with a regular colour camera; no 3D sensors whatsoever are required. As the experiments confirm, our approach distinguishes between actual pedestrians and depictions of them more than 87% of the time in synthetic and real-world tests in the worst scenarios, while accuracy reaches almost 98% in the best case. This work was funded by a Spanish Government grant (PID2019-104818RB-I00), supported by FEDER funds, and by a Spanish Ph.D. grant (FPU16/00887). Experiments were made possible by a generous hardware donation from NVIDIA.
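    One natural way to exploit the reconstructed 3D data for this task is a flatness test: points triangulated from a printed poster lie close to a single plane, while a real pedestrian exhibits genuine depth variation. The sketch below illustrates that idea; it is an assumption about how the 2D/3D fusion could be done, not the paper's published method, and the threshold is made up.

```python
# Illustrative sketch: given a sparse SfM point cloud, test whether the 3D
# points falling inside a 2D pedestrian detection are near-coplanar.
import numpy as np

def plane_residual(points: np.ndarray) -> float:
    """RMS distance of 3D points (N x 3) to their best-fit plane (via SVD)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                       # direction of least variance
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

def is_depiction(points_in_box: np.ndarray, flat_tol: float = 0.05) -> bool:
    """Classify a detection as a depiction if its 3D points are near-coplanar.
    flat_tol is in scene units (assumed metres) and is a hypothetical threshold."""
    return plane_residual(points_in_box) < flat_tol
```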

    Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments

    The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a robotic manipulator system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter made of different materials. Secondly, depth data are combined with the pixels of waste objects to compute a 3D location and to segment a three-dimensional point cloud for each litter item in the scene. A grasp in 3 Degrees of Freedom (DoF) is then estimated from each segmented cloud for a robot arm equipped with a gripper. Finally, two tactile-based algorithms are implemented to provide the gripper with a sense of touch, using two low-cost vision-based tactile sensors at the fingertips. One algorithm detects contact between the gripper and solid waste from tactile images, while the other detects slippage in order to prevent grasped objects from falling. Our proposal was successfully tested through extensive experimentation with objects varying in size, texture, geometry and material in different outdoor environments (a tiled pavement, a stone/soil surface, and grass). Our system achieved an average score of 94% for detection and Collection Success Rate (CSR) in terms of overall performance, and 80% for collecting litter items at the first attempt. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project. The computer facilities were provided through the IDIFEDER/2020/003 project.
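    The second functionality, combining depth data with detected waste pixels to obtain 3D points, follows the usual pinhole back-projection pattern. Below is a minimal sketch under assumed camera intrinsics; the variable names (fx, fy, cx, cy) and the metric depth convention are assumptions, not taken from the paper.

```python
# Minimal sketch of 2D/3D fusion: back-project the depth pixels belonging to a
# detected waste object into a 3D point cloud using the pinhole camera model.
import numpy as np

def mask_to_cloud(depth: np.ndarray, mask: np.ndarray,
                  fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return an (N, 3) point cloud for the pixels where mask is True.
    depth is an HxW array of metric depth values; invalid pixels are 0."""
    v, u = np.nonzero(mask.astype(bool) & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx                 # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# A 3-DoF grasp (e.g. planar position plus gripper orientation) could then be
# estimated from this segmented cloud, for instance from its centroid and
# principal axis.
```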

    Fusion of 2D and 3D data for real person detection

    This project addresses the detection of real people versus people appearing on billboards, bus-shelter advertisements, reflections, etc., applied to intelligent vehicles. To this end, a system is implemented that uses 2D information (images) and 3D information (reconstructed point clouds). By fusing both kinds of data, the system obtains 3D reconstructions of real people and of fake ones (people appearing on billboards, bus shelters, etc.).

    Rotational Slippage Prediction from Segmentation of Tactile Images

    Paper submitted to the ViTac 2023: Blending Virtual and Real Visuo-Tactile Perception Workshop at the IEEE International Conference on Robotics and Automation (ICRA), London, 29 May - 2 June 2023. Adding tactile sensors to a robotic system is becoming common practice for achieving more complex manipulation skills than those of robotic systems that only use external cameras to manipulate objects. The key advantage of tactile sensors is that they provide extra information about the physical properties of the grasp. In this paper, we implement a system to predict and quantify the rotational slippage of in-hand objects using the vision-based tactile sensor known as DIGIT. Our system comprises a neural network that obtains the segmented contact region (object-sensor) and then calculates the slippage rotation angle from this region using a thinning algorithm. In addition, we created our own tactile segmentation dataset, which is, as far as we are aware, the first in the literature, to train and evaluate our neural network, obtaining scores of 95% and 91% in the Dice and IoU metrics. In real-scenario experiments, our system predicts rotational slippage with a maximum mean rotational error of 3 degrees on previously unseen objects. Our system can thus be used to prevent an object from falling due to slippage. This research was funded by the Valencian Regional Government through the PROMETEO/2021/075 project and by the University of Alicante through the grant UAFPU21-26.
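    A hedged sketch of the thinning-based angle estimation named above: skeletonize the segmented contact region and take the orientation of the skeleton's principal axis. The skeletonize call is from scikit-image; the PCA line fit and the degree convention are illustrative assumptions, not the paper's code.

```python
# Thinning-based orientation estimate for a binary contact mask.
import numpy as np
from skimage.morphology import skeletonize

def skeleton_angle(mask: np.ndarray) -> float:
    """Orientation (degrees) of the thinned contact region in a binary mask."""
    skeleton = skeletonize(mask.astype(bool))
    ys, xs = np.nonzero(skeleton)
    if len(xs) < 2:
        raise ValueError("skeleton too small to estimate an orientation")
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    dx, dy = vt[0]                       # principal direction of the skeleton
    return float(np.degrees(np.arctan2(dy, dx)))

# Rotational slippage can then be tracked as the change in skeleton_angle
# between consecutive tactile frames.
```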

    Touch Detection with Low-cost Visual-based Sensor

    Robotic manipulation remains an unsolved problem. It involves many complex aspects, for example, tactile perception of different objects and materials, grasping control to plan the robotic hand pose, etc. Most previous works on this topic used expensive sensors, which hinders application in industry. In this work, we propose a grip detection system using a low-cost vision-based tactile sensor known as DIGIT, mounted on a ROBOTIQ 2F-140 gripper. We show that a deep convolutional network is able to detect contact or no contact. Having captured almost 12,000 contact and no-contact images of different objects, we achieve 99% accuracy on previously unseen samples in the best scenario. This system will thus allow us to implement a grasping controller for the gripper. Research work was completely funded by the European Commission and FEDER through the COMMANDIA project (SOE2/P1/F0638), supported by Interreg-V Sudoe. Computer facilities used were provided by the Valencian Government and FEDER through IDIFEDER/2020/003.
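    As a rough illustration of the kind of classifier described, here is a minimal binary contact / no-contact CNN for DIGIT tactile images in PyTorch. The architecture, input resolution and class ordering are assumptions, not the published model.

```python
# Minimal sketch of a binary contact classifier for tactile images.
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    """Small CNN mapping a 3x240x320 tactile image to contact logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # classes: no-contact / contact

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = ContactNet()(batch); contact = logits.argmax(dim=1) == 1
```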

    MOSPPA: monitoring system for palletised packaging recognition and tracking

    The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it moves inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks: one for oriented pallet detection and recognition, and another to track the identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and found that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearances of palletised packaging, providing statistical data in real time to assist human operators in decision-making. We tested the precision and performance of the system at the Smurfit Kappa facilities. Our proposal attained an Average Precision (AP) of 0.93 at 14 Frames Per Second (FPS), losing only 1% of detections. Our system is therefore able to optimise and speed up the logistic distribution process. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was partially funded by the private project (SMURFITKAPPA-21), supported by Smurfit Kappa Iberoamericana S.A. and the University of Alicante.
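    For the second pipeline stage, a common baseline is greedy IoU association of detections across frames. The sketch below shows that pattern; it is not the MOSPPA code, and oriented boxes are reduced to axis-aligned ones here purely to keep the example short.

```python
# Illustrative greedy IoU tracker associating pallet detections across frames.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]          # x1, y1, x2, y2

def iou(a: Box, b: Box) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tracks(tracks: Dict[int, Box], detections: List[Box],
                  next_id: int, thr: float = 0.3) -> Tuple[Dict[int, Box], int]:
    """Match each detection to the existing track with highest IoU, else spawn one."""
    new_tracks: Dict[int, Box] = {}
    for det in detections:
        best = max(tracks, key=lambda t: iou(tracks[t], det), default=None)
        if best is not None and iou(tracks[best], det) >= thr and best not in new_tracks:
            new_tracks[best] = det               # continue an existing track
        else:
            new_tracks[next_id] = det            # start a new pallet track
            next_id += 1
    return new_tracks, next_id
```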

    Grasping detection of unknown objects with visual-tactile sensor

    Robotic manipulation is still a challenge. It involves many complex aspects such as tactile perception of a wide variety of objects and materials, grip control to plan the robotic hand posture, etc. Most previous work used expensive sensors for tactile perception tasks, which makes it difficult to transfer results to industry. In this work, a grip detection system is proposed. It uses DIGIT sensors based on low-cost imaging technology. The method developed, which is based on deep Convolutional Neural Networks (CNN), is capable of detecting contact or non-contact with success rates greater than 95%. The system has been trained and tested on our own dataset, composed of more than 16,000 images of grasps of different objects, captured with several DIGIT units. The detection method is part of a grip controller used with a ROBOTIQ 2F-140 gripper. This work was funded by FEDER through the European COMMANDIA project (SOE2/P1/F0638) of the Interreg-V Sudoe call. In addition, DGX-A100 computing facilities acquired through grant IDIFEDER/2020/003 from the regional government of the Generalitat Valenciana were used.